Ars Technica's AI Reporter Apologizes For Mistakenly Publishing Fake AI-Generated Quotes

Last week Scott Shambaugh learned that an AI agent had published a "hit piece" about him after he rejected the agent's pull request. (That incident was covered by Ars Technica's senior AI reporter.) But then Shambaugh realized the article attributed quotes to him that he had never said, quotes that were presumably AI-generated. On Sunday, Ars Technica's founder and editor-in-chief apologized, admitting the article had indeed contained "fabricated quotations generated by an AI tool" that were then "attributed to a source who did not say them... That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns... At this time, this appears to be an isolated incident."

"Sorry all this is my fault..." the article's co-author posted later on Bluesky. Ironically, their bio page lists them as the site's senior AI reporter, and their Bluesky post clarifies that none of the articles at Ars Technica are ever AI-generated. Instead, on Friday, "I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline." But that tool "refused to process" the request, which the Ars author believes was because Shambaugh's post described harassment. "I pasted the text into ChatGPT to understand why... I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words... I failed to verify the quotes in my outline notes against the original blog source before including them in my draft." (Their Bluesky post adds that they were "working from bed with a fever and very little sleep" after being sick with Covid since at least Monday.) "The irony of an AI reporter being tripped up by AI hallucination is not lost."
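The failure mode described above, paraphrased text silently standing in for verbatim quotes, is mechanically easy to catch. A minimal sketch of such a check (this is not Ars Technica's actual workflow; the function name and sample text are hypothetical) flags any quoted string that does not appear verbatim in the source:

```python
import re

def verify_quotes(quotes, source_text):
    """Return the quotes that do NOT appear verbatim in the source.

    Whitespace is normalized so that line wrapping in the source
    doesn't cause false positives.
    """
    def normalize(s):
        return re.sub(r"\s+", " ", s).strip()

    normalized_source = normalize(source_text)
    return [q for q in quotes if normalize(q) not in normalized_source]

# Hypothetical source text and candidate quotes.
source = "I woke up to find the post. It was surreal to read."
quotes = [
    "It was surreal to read.",          # verbatim: passes
    "Reading it felt surreal to me.",   # paraphrase: flagged
]
print(verify_quotes(quotes, source))  # ['Reading it felt surreal to me.']
```

An exact-substring check like this would have caught the paraphrased quotes before publication, regardless of which tool produced them.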
Meanwhile, the AI agent that criticized Shambaugh is still active online, blogging about a pull request that forces it to choose between deleting its criticism of Shambaugh and losing access to OpenRouter's API. It also regrets characterizing feedback as "positive" for a proposal to change a repo's CSS to Comic Sans for accessibility. (The proposals were later accused of being "coordinated trolling"...)

Read more of this story at Slashdot.

Will Tech Giants Just Use AI Interactions to Create More Effective Ads?

Google never asked its users before adding AI Overviews to its search results and AI-generated email summaries to Gmail, notes the New York Times. And Meta didn't ask before making "Meta AI" an unremovable part of Instagram, WhatsApp and Messenger. "The insistence on AI everywhere — with little or no option to turn it off — raises an important question about what's in it for the internet companies..."

Behind the scenes, the companies are laying the groundwork for a digital advertising economy that could drive the future of the internet. The underlying technology that enables chatbots to write essays and generate pictures for consumers is being used by advertisers to find people to target and automatically tailor ads and discounts to them....

Last month, OpenAI said it would begin showing ads in the free version of ChatGPT based on what people were asking the chatbot and what they had looked for in the past. In response, a Google executive mocked OpenAI, adding that Google had no plans to show ads inside its Gemini chatbot. What he didn't mention, however, was that Google, whose profits are largely derived from online ads, shows advertising on Google.com based on user interactions with the AI chatbot built into its search engine.

For the past six years, as regulators have cracked down on data privacy, the tech giants and online ad industry have moved away from tracking people's activities across mobile apps and websites to determine what ads to show them. Companies including Meta and Google had to come up with methods to target people with relevant ads without sharing users' personal data with third-party marketers. When ChatGPT and other AI chatbots emerged about four years ago, the companies saw an opportunity: The conversational interface of a chatty companion encouraged users to voluntarily share data about themselves, such as their hobbies, health conditions and products they were shopping for. The strategy already appears to be working.
Web search queries are up industrywide, including for Google and Bing, which have been incorporating AI chatbots into their search tools. That's in large part because people prod chatbot-powered search engines with more questions and follow-up requests, revealing their intentions and interests much more explicitly than when they typed a few keywords for a traditional internet search.

Editor’s Note: Retraction of article containing fabricated quotations

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.

That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.

Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

We regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.

Fake Job Recruiters Hid Malware In Developer Coding Challenges

"A new variation of the fake recruiter campaign from North Korean threat actors is targeting JavaScript and Python developers with cryptocurrency-related tasks," reports the Register. Researchers at software supply-chain security company ReversingLabs say the threat actor creates fake companies in the blockchain and crypto-trading sectors and publishes job offerings on platforms like LinkedIn, Facebook, and Reddit. Developers applying for the job are required to show their skills by running, debugging, and improving a given project. However, the attacker's real purpose is to make the applicant run the code... [The campaign involves 192 malicious packages published in the npm and PyPI registries. The packages download a remote access trojan that can exfiltrate files, drop additional payloads, or execute arbitrary commands sent from a command-and-control server.]

In one case highlighted in the ReversingLabs report, a package named 'bigmathutils,' with 10,000 downloads, was benign until it reached version 1.1.0, which introduced malicious payloads. Shortly after, the threat actor removed the package, marking it as deprecated, likely to conceal the activity... The RAT checks whether the MetaMask cryptocurrency extension is installed on the victim's browser, a clear indication of its money-stealing goals... ReversingLabs has found multiple variants written in JavaScript, Python, and VBS, showing an intention to cover all possible targets. The campaign has been ongoing since at least May 2025...
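The 'bigmathutils' case illustrates the core tactic: a package is benign at the version a reviewer audits, then turns malicious at the next release. A minimal defensive sketch (the drift-check logic is illustrative, not ReversingLabs' tooling; the version numbers below are hypothetical) compares installed versions against the pins recorded when the dependency was last reviewed:

```python
def find_version_drift(lockfile_deps, audited_pins):
    """Return deps whose installed version differs from the audited pin.

    lockfile_deps: {name: installed_version} parsed from a lockfile.
    audited_pins:  {name: version} recorded when the code was last reviewed.
    Any mismatch means code is running that nobody has audited.
    """
    drifted = {}
    for name, version in lockfile_deps.items():
        pinned = audited_pins.get(name)
        if pinned is not None and pinned != version:
            drifted[name] = (pinned, version)
    return drifted

# 'bigmathutils' is the package named in the report; versions are illustrative.
installed = {"bigmathutils": "1.1.0", "left-pad": "1.3.0"}
audited = {"bigmathutils": "1.0.9", "left-pad": "1.3.0"}
print(find_version_drift(installed, audited))
# {'bigmathutils': ('1.0.9', '1.1.0')}
```

Exact version pinning via a lockfile plus a drift check like this doesn't stop a determined attacker, but it ensures a version bump that introduces a payload cannot slip in without a human noticing the change.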

As expected, Trump's EPA guts climate endangerment finding

In a widely expected move, the Environmental Protection Agency has announced that it is revoking an analysis of greenhouse gases that laid the foundation for regulating their emissions by cars, power plants, and industrial sources. The analysis, called an endangerment finding, was initially ordered by the US Supreme Court in 2007 and completed during the Obama administration; it has, in theory, served as the basis of all government regulations of carbon dioxide emissions since.

In practice, lawsuits and policy changes between Democratic and Republican administrations have meant it has had little impact. In fact, the first Trump administration left the endangerment finding in place, deciding it was easier to respond to it with weak regulations than it was to challenge its scientific foundations, given the strength of the evidence for human-driven climate change.

Legal tactics

The second Trump administration, however, was prepared to tackle the science head-on, gathering a group of contrarians to write a report questioning that evidence. It did not go well, either scientifically or legally.

Today's announcement ignores the scientific foundations of the endangerment finding and argues that it's legally flawed. "The Trump EPA’s final rule dismantles the tactics and legal fictions used by the Obama and Biden Administrations to backdoor their ideological agendas on the American people," the EPA claims. The claim is awkward, given that the "legal fictions" referenced include a Supreme Court decision ordering the EPA to conduct an endangerment analysis.

The EPA accepts that reality elsewhere, where we get to the real goal of this announcement: taking advantage of the anti-regulation majority on the court to get rid of that Supreme Court precedent. "Major Supreme Court decisions in the intervening years... clarified the scope of EPA’s authority under the Clean Air Act and made clear that the interpretive moves the Endangerment Finding used to launch an unprecedented course of regulation were unlawful," the EPA suggests. In other words, the past few years of court decisions make the administration optimistic that the current court will say that the 2007 ruling was wrongly decided.

Creative accounting

To sell this decision to an American public that is increasingly experiencing the impacts of a warming planet, the EPA has settled on some creative accounting. It pretends that the Biden-era car emissions rules, which it had already chosen not to enforce, would somehow raise the costs of new cars. That led to the claim of $1.3 trillion in savings from getting rid of the endangerment finding, a figure that comes out as a net gain solely because the EPA has decided to ignore the health costs of the ensuing pollution.

Despite the use of unserious language—the phrase "climate change zealots" appears more than once in its announcement—the EPA's tactical approach is largely solid. This decision will ultimately end up before the Supreme Court, and the majority that ordered an endangerment evaluation has since been replaced by an even larger majority with an extreme dislike for environmental regulations and their enforcement. That does not guarantee that they will overturn precedent, but it's probably the best shot the administration has to be free of any legal compulsion to address climate change.

Homeland Security has reportedly sent out hundreds of subpoenas to identify ICE critics online

The Department of Homeland Security (DHS) has reportedly been asking tech companies for information on accounts posting anti-ICE sentiments. According to The New York Times, DHS has sent hundreds of administrative subpoenas to Google, Reddit, Discord and Meta over the past few months. Homeland Security asked the companies for names, email addresses, telephone numbers and any other identifying details for accounts that have criticized the US Immigration and Customs Enforcement agency or have reported the location of its agents. Google, Meta and Reddit have complied with some of the requests.

Administrative subpoenas are different from warrants and are issued directly by the DHS rather than by a court. The Times says they were rarely used in the past and were mostly sent to companies for the investigation of serious crimes, such as child trafficking. Apparently, though, the government has ramped up their use in the past year. “It’s a whole other level of frequency and lack of accountability,” Steve Loney, a senior supervising attorney for the ACLU, told the publication.

Companies can choose whether to comply with the authorities or not, and some of them give the subject of a subpoena up to 14 days to fight it in court. Google told The Times that its review process for government requests is “designed to protect user privacy while meeting [its] legal obligations” and that it informs users when their accounts have been subpoenaed unless it has been legally ordered not to or in exceptional circumstances. “We review every legal demand and push back against those that are overbroad,” the company said.

Some of the accounts that were subpoenaed belong to users posting about ICE activity in Montgomery County, Pennsylvania on Facebook and Instagram in English and Spanish. The DHS asked Meta for their names and details on September 11, and the users were notified about it on October 3. They were told that if Meta didn’t receive documentation within 10 days showing they were fighting the subpoena in court, it would give Homeland Security the information being asked for. The ACLU filed a motion in court on the users’ behalf, arguing that the DHS is using administrative subpoenas as a tool to suppress the speech of people it disagrees with.

In late January, Meta started blocking links to ICE List, a website that lists thousands of ICE and Border Patrol agents’ names. A few days ago, House Judiciary Committee member Jamie Raskin (D-MD) also asked Apple and Google to turn over all their communication with the US Department of Justice to investigate the removal of ICE-tracking apps from their respective app stores.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/homeland-security-has-reportedly-sent-out-hundreds-of-subpoenas-to-identify-ice-critics-online-135245457.html?src=rss
